    Optimal Geo-Indistinguishable Mechanisms for Location Privacy

    We consider the geo-indistinguishability approach to location privacy and its trade-off with utility. We show that, given a desired degree of geo-indistinguishability, it is possible to construct a mechanism that minimizes the service-quality loss, using linear programming techniques. In addition, we show that, under certain conditions, such a mechanism also provides optimal privacy in the sense of Shokri et al. Furthermore, we propose a method to reduce the number of constraints of the linear program from cubic to quadratic, maintaining the privacy guarantees and without significantly affecting the utility of the generated mechanism. This considerably reduces the time required to solve the linear program, thus significantly enlarging the location sets for which optimal mechanisms can be computed.
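
    The linear program described above is easy to set up concretely. Below is a minimal sketch, not the paper's implementation: a toy four-point location grid, an assumed uniform prior, Euclidean quality loss, and the full cubic family of geo-indistinguishability constraints, solved with scipy's linprog.

        # A minimal sketch of the optimal-mechanism linear program: minimize
        # expected quality loss subject to geo-indistinguishability. The
        # locations, prior, and metric are illustrative assumptions.
        import itertools
        import numpy as np
        from scipy.optimize import linprog

        locs = [(0, 0), (0, 1), (1, 0), (1, 1)]   # toy location grid (assumed)
        n = len(locs)
        prior = np.full(n, 1.0 / n)               # assumed uniform prior
        eps = 1.0                                 # geo-indistinguishability level

        def dist(a, b):                           # Euclidean metric on locations
            return np.hypot(a[0] - b[0], a[1] - b[1])

        # Variables: k[x, z] = P(report z | true location x), flattened row-major.
        idx = lambda x, z: x * n + z

        # Objective: expected quality loss  sum_x pi(x) sum_z k[x,z] d(x,z).
        c = np.array([prior[x] * dist(locs[x], locs[z])
                      for x in range(n) for z in range(n)])

        # Geo-indistinguishability: k[x,z] <= e^{eps d(x,x')} k[x',z] for all
        # pairs x != x' and all outputs z (the cubic constraint family).
        A_ub, b_ub = [], []
        for x, xp, z in itertools.product(range(n), repeat=3):
            if x == xp:
                continue
            row = np.zeros(n * n)
            row[idx(x, z)] = 1.0
            row[idx(xp, z)] = -np.exp(eps * dist(locs[x], locs[xp]))
            A_ub.append(row)
            b_ub.append(0.0)

        # Each row of the mechanism must be a probability distribution.
        A_eq = np.zeros((n, n * n))
        for x in range(n):
            for z in range(n):
                A_eq[x, idx(x, z)] = 1.0
        b_eq = np.ones(n)

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, 1)] * (n * n), method="highs")
        print("expected quality loss:", res.fun)
        print(res.x.reshape(n, n))                # the optimal mechanism

    The constraint reduction mentioned in the abstract would replace the all-pairs family above with constraints between suitably chosen neighbouring locations, shrinking the constraint set from cubic to quadratic.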

    Trust in Crowds: probabilistic behaviour in anonymity protocols

    The existing analysis of the Crowds anonymity protocol assumes that a participating member is either ‘honest’ or ‘corrupted’. This paper generalises this analysis so that each member is assumed to maliciously disclose the identity of other nodes with a probability determined by her vulnerability to corruption. Within this model, the trust in a principal is defined to be the probability that she behaves honestly. We investigate the effect of such probabilistic behaviour on the anonymity of the principals participating in the protocol, and formulate the necessary conditions to achieve ‘probable innocence’. Using these conditions, we propose a generalised Crowds-Trust protocol which uses trust information to achieve ‘probable innocence’ for principals exhibiting probabilistic behaviour.
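
    For reference, the classical probable-innocence condition of Reiter and Rubin, which the paper generalises to trust-weighted probabilistic behaviour, can be checked directly; a minimal sketch with illustrative parameters:

        # Classical probable innocence for Crowds: with n members, c of them
        # corrupted, and forwarding probability pf > 1/2, the condition is
        # n >= pf / (pf - 1/2) * (c + 1). Parameters below are illustrative;
        # the paper replaces the honest/corrupted split by per-member trust.
        def probable_innocence(n, c, pf):
            assert pf > 0.5, "forwarding probability must exceed 1/2"
            return n >= pf / (pf - 0.5) * (c + 1)

        print(probable_innocence(n=20, c=4, pf=0.75))   # True: 20 >= 15
        print(probable_innocence(n=10, c=4, pf=0.75))   # False: 10 < 15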

    Compositionality Results for Quantitative Information Flow

    In the min-entropy approach to quantitative information flow, the leakage is defined in terms of a minimization problem which, for large systems, can be computationally rather heavy. The same happens for the recently proposed generalization called g-vulnerability. In this paper we study the case in which the channel associated with the system can be decomposed into simpler channels, which typically happens when the observables consist of several components. Our main contribution is the derivation of bounds on the g-leakage of the whole system in terms of the g-leakage of its components.
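
    The flavour of these bounds can be illustrated in the simplest (min-entropy) case: for a uniform prior, the leakage of a parallel composition of two channels with the same secret is at most the sum of the component leakages. A minimal sketch with assumed channel matrices:

        # Min-entropy leakage of a channel, and a numerical check of the
        # compositional bound L(C1 x C2) <= L(C1) + L(C2) under a uniform
        # prior. The 2x2 channel matrices are illustrative assumptions.
        import numpy as np

        def min_entropy_leakage(C, prior):
            # leakage = log2(posterior vulnerability / prior vulnerability)
            post = sum(max(prior[x] * C[x, y] for x in range(C.shape[0]))
                       for y in range(C.shape[1]))
            return np.log2(post / max(prior))

        C1 = np.array([[0.8, 0.2],          # rows: secrets, cols: observables
                       [0.3, 0.7]])
        C2 = np.array([[0.6, 0.4],
                       [0.1, 0.9]])
        # Parallel composition: outputs are pairs, probabilities multiply.
        C12 = np.einsum('xi,xj->xij', C1, C2).reshape(2, 4)

        prior = np.array([0.5, 0.5])
        l1, l2 = (min_entropy_leakage(C, prior) for C in (C1, C2))
        l12 = min_entropy_leakage(C12, prior)
        print(f"L(C1xC2) = {l12:.3f} <= {l1 + l2:.3f} = L(C1) + L(C2)")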

    Quantitative information flow, with a view

    We put forward a general model intended for assessment of system security against passive eavesdroppers, both quantitatively (how much information is leaked) and qualitatively (what properties are leaked). To this purpose, we extend information hiding systems (IHS), a model where the secret-observable relation is represented as a noisy channel, with views: basically, partitions of the state-space. Given a view W and n independent observations of the system, one is interested in the probability that a Bayesian adversary wrongly predicts the class of W the underlying secret belongs to. We offer results that allow one to easily characterise the behaviour of this error probability as a function of the number of observations, in terms of the channel matrices defining the IHS and the view W. In particular, we provide expressions for the limit value as n → ∞, show by tight bounds that convergence is exponential, and also characterise the rate of convergence to predefined error thresholds. We then show a few instances of statistical attacks that can be assessed by a direct application of our model: attacks against modular exponentiation that exploit timing leaks, against anonymity in mix-nets, and against privacy in sparse datasets.
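
    For tiny systems, the error probability in question can be computed exactly by enumerating observation tuples. A minimal sketch, with an assumed channel, prior, and two-block view W:

        # Probability that a MAP (Bayesian) adversary misclassifies the
        # secret's block of a view W after n independent observations.
        # Exact enumeration; channel, prior, and view are assumptions.
        import itertools
        import numpy as np

        C = np.array([[0.7, 0.2, 0.1],     # rows: secrets, cols: observables
                      [0.2, 0.7, 0.1],
                      [0.1, 0.1, 0.8]])
        prior = np.array([0.4, 0.3, 0.3])
        view = [0, 0, 1]                   # W partitions {s0, s1} vs {s2}

        def bayes_error(n):
            err = 0.0
            for ys in itertools.product(range(C.shape[1]), repeat=n):
                # joint probability of each secret with this observation tuple
                joint = prior * np.prod([C[:, y] for y in ys], axis=0)
                # posterior mass of each block of W
                block = {}
                for s, w in enumerate(view):
                    block[w] = block.get(w, 0.0) + joint[s]
                # MAP adversary picks the heaviest block; the rest is error
                err += sum(block.values()) - max(block.values())
            return err

        for n in (1, 2, 4, 8):
            print(n, bayes_error(n))       # error probability shrinks with n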

    Asymptotic information leakage under one-try attacks

    We study the asymptotic behaviour of (a) information leakage and (b) the adversary’s error probability in information hiding systems modelled as noisy channels. Specifically, we assume the attacker can make a single guess after observing n independent executions of the system, throughout which the secret information is kept fixed. We show that the asymptotic behaviour of quantities (a) and (b) can be determined in a simple way from the channel matrix. Moreover, simple and tight bounds on them as functions of n show that the convergence is exponential. We also discuss feasible methods to evaluate the rate of convergence. Our results cover both the Bayesian case, where a prior probability distribution on the secrets is assumed known to the attacker, and the maximum-likelihood case, where the attacker does not know such distribution. In the Bayesian case, we identify the distributions that maximize the leakage. We consider both the min-entropy setting studied by Smith and the additive form recently proposed by Braun et al., and show that the two forms agree asymptotically. Next, we extend these results to a more sophisticated eavesdropping scenario, where the attacker can perform a (noisy) observation at each state of the computation and the systems are modelled as hidden Markov models.
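
    One standard way to evaluate such a convergence rate in the two-secret Bayesian case is the Chernoff information between the corresponding rows of the channel matrix, with the error decaying roughly as 2^(-n·rate). This is a textbook fact rather than the paper's specific method; the rows below are assumptions.

        # Chernoff information between two channel rows, as an exponential
        # rate for the two-hypothesis Bayesian error under n observations.
        # The rows P and Q are illustrative assumptions.
        import numpy as np
        from scipy.optimize import minimize_scalar

        P = np.array([0.7, 0.2, 0.1])      # channel row of secret s0
        Q = np.array([0.2, 0.7, 0.1])      # channel row of secret s1

        def chernoff_information(P, Q):
            f = lambda lam: np.log2(np.sum(P**lam * Q**(1 - lam)))
            res = minimize_scalar(f, bounds=(0.0, 1.0), method="bounded")
            return -res.fun

        print(f"error ~ 2^(-{chernoff_information(P, Q):.3f} n)")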

    Quantitative information flow under generic leakage functions and adaptive adversaries

    We put forward a model of action-based randomization mechanisms to analyse quantitative information flow (QIF) under generic leakage functions, and under possibly adaptive adversaries. This model subsumes many of the QIF models proposed so far. Our main contributions include the following: (1) we identify mild general conditions on the leakage function under which it is possible to derive general and significant results on adaptive QIF; (2) we contrast the efficiency of adaptive and non-adaptive strategies, showing that the latter are as efficient as the former in terms of length, up to an expansion factor bounded by the number of available actions; (3) we show that the maximum information leakage over strategies, given a finite time horizon, can be expressed in terms of a Bellman equation. This can be used to compute an optimal finite strategy recursively, by resorting to standard methods like backward induction.
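
    The Bellman characterisation in point (3) can be sketched as a backward induction over belief states: the value of a belief at horizon t is the best, over actions, of the expected value of the induced posteriors at horizon t-1. A minimal sketch with two assumed actions (channels) and one-try vulnerability as the leakage function:

        # Maximum expected vulnerability reachable by an adaptive adversary
        # within a finite horizon, by backward induction over beliefs.
        # Actions and the vulnerability function are illustrative.
        import numpy as np

        actions = {
            "a0": np.array([[0.9, 0.1], [0.4, 0.6]]),
            "a1": np.array([[0.6, 0.4], [0.1, 0.9]]),
        }

        def vulnerability(belief):          # one-try guessing vulnerability
            return max(belief)

        def value(belief, horizon):
            # V_0 = V(belief);  V_t = max_a E_y [ V_{t-1}(posterior) ]
            if horizon == 0:
                return vulnerability(belief)
            best = 0.0
            for C in actions.values():
                v = 0.0
                for y in range(C.shape[1]):
                    p_y = float(belief @ C[:, y])    # probability of y
                    if p_y > 0:
                        v += p_y * value(belief * C[:, y] / p_y, horizon - 1)
                best = max(best, v)
            return best

        prior = np.array([0.5, 0.5])
        for t in range(4):
            print(t, value(prior, t))       # grows with the horizon

    Because the action is re-chosen at every posterior, the recursion captures adaptivity; a non-adaptive strategy would fix the whole action sequence up front.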

    Generalized differential privacy: regions of priors that admit robust optimal mechanisms

    Differential privacy is a notion of privacy that was initially designed for statistical databases, and has recently been extended to a more general class of domains. Both differential privacy and its generalized version can be achieved by adding random noise to the reported data. Thus, privacy is obtained at the cost of reducing the data's accuracy, and therefore their utility. In this paper we consider the problem of identifying optimal mechanisms for generalized differential privacy, i.e. mechanisms that maximize the utility for a given level of privacy. The utility usually depends on a prior distribution of the data, and naturally it would be desirable to design mechanisms that are universally optimal, i.e., optimal for all priors. However, it is already known that such mechanisms do not exist in general. We then characterize maximal classes of priors for which a mechanism that is optimal for all the priors of the class does exist. We show that such classes can be defined as convex polytopes in the space of priors. As an application, we consider the privacy problem that arises when using, for instance, location-based services, and we show how to define mechanisms that maximize the quality of service while preserving the desired level of geo-indistinguishability.
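
    The prior-dependence of utility, which is what makes universal optimality fail in general, is easy to make concrete: a Bayesian user post-processes (remaps) each reported value optimally, and the resulting expected loss depends on the prior. The mechanisms, loss function, and priors below are illustrative assumptions.

        # Expected loss of a mechanism under an optimal Bayesian remap:
        #   sum_y min_r sum_x prior[x] * C[x,y] * loss[x,r]
        # Mechanisms, loss, and priors are illustrative.
        import numpy as np

        def expected_loss(C, prior, loss):
            joint = prior[:, None] * C          # joint over (secret, report)
            return sum(min(joint[:, y] @ loss[:, r]
                           for r in range(loss.shape[1]))
                       for y in range(C.shape[1]))

        loss = 1 - np.eye(3)                    # 0/1 loss: wrong guess costs 1
        M1 = np.array([[0.8, 0.1, 0.1],
                       [0.1, 0.8, 0.1],
                       [0.1, 0.1, 0.8]])
        M2 = np.array([[0.7, 0.3, 0.0],
                       [0.0, 0.7, 0.3],
                       [0.3, 0.0, 0.7]])

        for prior in (np.array([1/3, 1/3, 1/3]), np.array([0.7, 0.2, 0.1])):
            print(prior, expected_loss(M1, prior, loss),
                  expected_loss(M2, prior, loss))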

    Protecting Locations with Differential Privacy under Temporal Correlations

    Concerns about location privacy frequently arise with the rapid development of GPS-enabled devices and location-based applications. While spatial transformation techniques such as location perturbation or generalization have been studied extensively, most rely on syntactic privacy models without rigorous privacy guarantees. Many of them only consider static scenarios or perturb the location at single timestamps without considering temporal correlations of a moving user's locations, and hence are vulnerable to various inference attacks. While differential privacy has been accepted as a standard for privacy protection, applying it in location-based applications presents new challenges, as the protection needs to be enforced on the fly for a single user and needs to incorporate temporal correlations between a user's locations. In this paper, we propose a systematic solution to preserve location privacy with rigorous privacy guarantees. First, we propose a new definition, "δ-location set" based differential privacy, to account for the temporal correlations in location data. Second, we show that the well-known ℓ₁-norm sensitivity fails to capture the geometric sensitivity in multidimensional space and propose a new notion, the sensitivity hull, based on which the error of differential privacy is bounded. Third, to obtain optimal utility we present a planar isotropic mechanism (PIM) for location perturbation, which is the first mechanism achieving the lower bound of differential privacy. Experiments on real-world datasets also demonstrate that PIM significantly outperforms baseline approaches in data utility.
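
    The sensitivity hull mentioned above has a direct geometric reading: instead of a single ℓ₁-norm bound, one takes the convex hull of the pairwise differences between locations in the δ-location set. A minimal sketch with an assumed location set (this is not the paper's PIM implementation):

        # Sensitivity hull of a toy delta-location set: the convex hull of
        # all pairwise differences x - x'. The locations are illustrative.
        import itertools
        import numpy as np
        from scipy.spatial import ConvexHull

        location_set = np.array([[0.0, 0.0], [1.0, 0.5],
                                 [0.5, 2.0], [2.0, 1.0]])

        diffs = np.array([a - b for a, b in
                          itertools.permutations(location_set, 2)])

        hull = ConvexHull(diffs)            # the sensitivity hull K
        print("hull vertices:\n", diffs[hull.vertices])
        print("hull area:", hull.volume)    # .volume is area in 2-D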

    Compositional closure for Bayes Risk in probabilistic noninterference

    We give a sequential model for noninterference security including probability (but not demonic choice), thus supporting reasoning about the likelihood that high-security values might be revealed by observations of low-security activity. Our novel methodological contribution is the definition of a refinement order and its use to compare security measures between specifications and (their supposed) implementations. This contrasts with the more common practice of evaluating the security of individual programs in isolation. The appropriateness of our model and order is supported by our showing that our refinement order is the greatest compositional relation (the compositional closure) with respect to our semantics and an "elementary" order based on Bayes Risk, a security measure already in widespread use. We also relate refinement to other measures such as Shannon Entropy. By applying the approach to a non-trivial example, the anonymous-majority Three-Judges protocol, we demonstrate that correctness arguments can be simplified by the sort of layered developments (through levels of increasing detail) that are allowed and encouraged by compositional semantics.
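
    The "elementary" Bayes-Risk comparison underlying the refinement order can be sketched directly: an implementation should be at least as secure as its specification, i.e. its Bayes risk (the optimal one-try attacker's error probability) should be no smaller. The two channels below are illustrative, with the implementation a noisier version of the specification.

        # Bayes risk of a channel, and a spot-check that an implementation
        # is at least as secure as its specification on sampled priors.
        # The spec and impl channels are illustrative assumptions.
        import numpy as np

        def bayes_risk(C, prior):
            # probability the optimal one-try attacker guesses wrongly
            return 1.0 - sum(max(prior[x] * C[x, y] for x in range(C.shape[0]))
                             for y in range(C.shape[1]))

        spec = np.array([[0.7, 0.3],        # specification channel
                         [0.3, 0.7]])
        impl = np.array([[0.6, 0.4],        # noisier implementation
                         [0.4, 0.6]])

        rng = np.random.default_rng(0)
        for _ in range(5):
            p = rng.dirichlet(np.ones(2))
            print(p, bayes_risk(impl, p) >= bayes_risk(spec, p))

    Spot-checking priors only gives evidence for the elementary order; the paper's refinement order strengthens this to a relation that is preserved by all program contexts.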

    Probable Innocence and Independent Knowledge

    We analyse the Crowds anonymity protocol under the novel assumption that the attacker has independent knowledge of the behavioural patterns of individual users. Under such conditions we study, reformulate and extend Reiter and Rubin's notion of probable innocence, and provide a new formalisation for it based on the concept of protocol vulnerability. Accordingly, we establish new formal relationships between protocol parameters and attackers' knowledge, expressing necessary and sufficient conditions to ensure probable innocence.
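
    The role of the attacker's independent knowledge can be sketched as a non-uniform prior over initiators, with a posterior reading of probable innocence. The channel (true initiator to first user detected) and prior below are illustrative assumptions, not the paper's formalisation.

        # Posterior check of probable innocence under a behavioural prior:
        # the detected user should appear to be the true initiator with
        # probability at most 1/2. Channel and prior are illustrative.
        import numpy as np

        C = np.array([[0.50, 0.25, 0.25],   # P(detected | initiator)
                      [0.25, 0.50, 0.25],
                      [0.25, 0.25, 0.50]])
        prior = np.array([0.6, 0.3, 0.1])   # attacker's independent knowledge

        joint = prior[:, None] * C
        posteriors = joint / joint.sum(axis=0)   # P(initiator | detected)

        print(posteriors)
        print("probable innocence:",
              all(posteriors[i, i] <= 0.5 for i in range(3)))

    With the skewed prior above the check fails for the most suspected user, which is exactly why conditions for probable innocence must account for the attacker's knowledge.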